# Chinese text generation
## Tencent.hunyuan A13B Instruct GGUF

*Large Language Model · DevQuasar · 402 downloads · 1 like*

A quantized GGUF build of Tencent's Hunyuan-A13B-Instruct model, reducing memory and compute requirements while preserving output quality.
## Deepseek R1 0528 Qwen3 8B 6bit

*MIT · Large Language Model · mlx-community · 582 downloads · 1 like*

A 6-bit quantized version converted from DeepSeek-R1-0528-Qwen3-8B, suitable for text generation tasks in the MLX framework.
## Deepseek R1 0528 Qwen3 8B 4bit

*MIT · Large Language Model · mlx-community · 924 downloads · 1 like*

A 4-bit quantized version converted from DeepSeek-R1-0528-Qwen3-8B, optimized for the MLX framework and suitable for text generation tasks.
## Deepseek R1 0528 4bit

*Large Language Model · mlx-community · 157 downloads · 9 likes*

DeepSeek-R1-0528-4bit is a 4-bit quantized model converted from DeepSeek-R1-0528, optimized for the MLX framework.
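The quantized variants in this list trade precision for memory: weight storage scales with parameter count times bits per weight. A rough sketch of what that means for an 8B model such as the DeepSeek distillations above (pure Python; assumes ideal packing and ignores quantization metadata such as scales, so real files are somewhat larger):

```python
def weight_bytes(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in bytes: params * bits / 8.

    Ignores per-block scales/zero-points and any non-quantized layers.
    """
    return n_params * bits_per_weight / 8

GIB = 1024 ** 3

# DeepSeek-R1-0528-Qwen3-8B: roughly 8e9 parameters
for bits in (16, 6, 4):
    print(f"{bits:>2}-bit: {weight_bytes(8e9, bits) / GIB:.1f} GiB")
```

At 4-bit the weights of an 8B model fit in under 4 GiB, which is what makes laptop-class inference with MLX or llama.cpp practical.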
## Qwen3 235B A22B 3bit DWQ

*Apache-2.0 · Large Language Model · mlx-community · 41 downloads · 2 likes*

A 3-bit DWQ-quantized model converted from Qwen/Qwen3-235B-A22B, suitable for text generation tasks.
## Qwen3 235B A22B 4bit DWQ

*Apache-2.0 · Large Language Model · mlx-community · 70 downloads · 1 like*

Qwen3-235B-A22B-4bit-DWQ is a 4-bit quantized version converted from the Qwen3-235B-A22B-8bit model, suitable for text generation tasks.
## Qwen3 4B 4bit DWQ

*Apache-2.0 · Large Language Model · mlx-community · 517 downloads · 2 likes*

A 4-bit DWQ quantized version of Qwen3-4B, converted to MLX format for text generation with the mlx library.
## Qwen3 30B A3B 4bit DWQ 05082025

*Apache-2.0 · Large Language Model · mlx-community · 240 downloads · 5 likes*

A 4-bit quantized model converted from Qwen/Qwen3-30B-A3B to MLX format, suitable for text generation tasks.
## Qwen3 30B A3B 4bit DWQ 0508

*Apache-2.0 · Large Language Model · mlx-community · 410 downloads · 12 likes*

Qwen3-30B-A3B-4bit-DWQ-0508 is a 4-bit quantized model converted from Qwen/Qwen3-30B-A3B to MLX format, suitable for text generation tasks.
## Qwen3 14B 4bit AWQ

*Apache-2.0 · Large Language Model · mlx-community · 252 downloads · 2 likes*

An MLX-format model converted from Qwen/Qwen3-14B and compressed to 4-bit with AWQ quantization, suitable for efficient inference on the MLX framework.
## Qwen3 30B A3B 4bit DWQ

*Apache-2.0 · Large Language Model · mlx-community · 561 downloads · 19 likes*

A 4-bit version of the Qwen3-30B-A3B model created with a custom DWQ quantization process that distills a 6-bit model down to 4-bit, suitable for text generation tasks.
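AWQ and DWQ differ in how they calibrate quantization parameters (activation-aware scaling vs. distillation from a higher-precision model), but the storage underneath is typically blockwise integer quantization. A minimal sketch of symmetric blockwise 4-bit round-tripping, illustrative only; real schemes add zero-points, grouped scale layouts, and the calibration step that gives AWQ/DWQ their names:

```python
def quantize_block(block: list[float]) -> tuple[list[int], float]:
    """Symmetric 4-bit quantization: map floats to integers in [-7, 7]."""
    scale = max(abs(x) for x in block) / 7 or 1.0  # avoid zero scale
    q = [max(-7, min(7, round(x / scale))) for x in block]
    return q, scale

def dequantize_block(q: list[int], scale: float) -> list[float]:
    """Recover approximate floats from the quantized integers."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.07, -0.21, 0.49, -0.05, 0.3]
q, s = quantize_block(weights)
restored = dequantize_block(q, s)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, round(max_err, 3))  # rounding error is bounded by scale / 2
```

Each float costs 4 bits plus a shared per-block scale, versus 16 bits in the original, which is where the roughly 4x size reduction in the listings comes from.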
## Qwen3 8B 4bit AWQ

*Apache-2.0 · Large Language Model · mlx-community · 1,682 downloads · 1 like*

Qwen3-8B-4bit-AWQ is a 4-bit AWQ quantized version converted from Qwen/Qwen3-8B, suitable for text generation tasks in the MLX framework.
## Qwen3 235B A22B 4bit

*Apache-2.0 · Large Language Model · mlx-community · 974 downloads · 6 likes*

A 4-bit quantized version of Qwen/Qwen3-235B-A22B converted to MLX format, suitable for text generation tasks.
## Qwen3 30B A3B 4bit

*Apache-2.0 · Large Language Model · mlx-community · 2,394 downloads · 7 likes*

Qwen3-30B-A3B-4bit is a 4-bit quantized version converted from Qwen/Qwen3-30B-A3B, suitable for efficient text generation under the MLX framework.
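Names like "Qwen3-30B-A3B" and "Qwen3-235B-A22B" follow a mixture-of-experts convention: the first number is the total parameter count, and the "A" number is the parameters active per token. Weight memory is driven by the total count, while per-token compute tracks the active count. A quick sketch of the implications (approximate; ignores quantization metadata and KV cache):

```python
GIB = 1024 ** 3

def approx_4bit_gib(total_params: float) -> float:
    """Approximate 4-bit weight memory: 0.5 bytes per parameter."""
    return total_params * 0.5 / GIB

# (total params, active params) as encoded in the model names above
models = {
    "Qwen3-30B-A3B": (30e9, 3e9),
    "Qwen3-235B-A22B": (235e9, 22e9),
}
for name, (total, active) in models.items():
    print(f"{name}: ~{approx_4bit_gib(total):.0f} GiB at 4-bit, "
          f"{active / total:.0%} of weights active per token")
```

This is why a 30B-A3B model can generate tokens at roughly the speed of a 3B dense model while still needing about 14 GiB of memory for 4-bit weights.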
## Qwen3 14B MLX 4bit

*Apache-2.0 · Large Language Model · lmstudio-community · 3,178 downloads · 4 likes*

A 4-bit quantized version of the Qwen/Qwen3-14B model converted with mlx-lm, suitable for text generation tasks.
## Gemma 3 4b Pt Q4 0 GGUF

*Large Language Model · ngxson · 74 downloads · 1 like*

A GGUF-format model converted from Google's Gemma 3 4B pretrained model, suitable for text generation tasks.
## Bloom 1b4 Zh

*OpenRAIL · Large Language Model · Transformers, Chinese · Langboat · 5,157 downloads · 18 likes*

A 1.4B-parameter Chinese language model developed from the bigscience/bloom-1b7 architecture, with a compressed vocabulary to reduce GPU memory usage.
## Gpt2 Wechsel Chinese

*MIT · Large Language Model · Transformers, Chinese · benjamin · 19 downloads · 4 likes*

A Chinese GPT-2 model trained with the WECHSEL method, which transfers a monolingual language model across languages by effectively initializing its subword embeddings.